27 research outputs found

    Automatic generation of high speed elliptic curve cryptography code

    Get PDF
    Apparently, trust is a rare commodity when power, money or life itself are at stake. History is full of examples. Julius Caesar did not trust his generals, so much so that: ``If he had anything confidential to say, he wrote it in cipher, that is, by so changing the order of the letters of the alphabet, that not a word could be made out. If anyone wishes to decipher these, and get at their meaning, he must substitute the fourth letter of the alphabet, namely D, for A, and so with the others.'' And so the history of cryptography took its first steps. Nowadays, encryption is no longer an emperor's prerogative but a daily-life operation. Cryptography is pervasive, ubiquitous and, best of all, completely transparent to the unaware user. Each time we buy something on the Internet we use it. Each time we search for something on Google we use it. All of this happens without us (almost) realizing that it silently protects our privacy and our secrets. Encryption is a very interesting instrument in the "toolbox of security" because it has very few side effects, at least on the user side. A particularly important one is the intrinsic slowdown that its use imposes on communications. High speed cryptography is very important for the Internet, where busy servers proliferate. Being faster is a double advantage: more throughput and less server overhead. In this context, however, public key algorithms start with a big handicap: they perform very poorly compared to their symmetric counterparts. For this reason their use is often reduced to the essential operations, most notably key exchanges and digital signatures. The high speed public key cryptography challenge is a very practical topic with serious repercussions in our technocentric world. Using weak algorithms with a reduced key length to increase the performance of a system can lead to catastrophic results. In 1985, Miller and Koblitz independently proposed using the group of rational points of an elliptic curve over a finite field to build an asymmetric algorithm. Elliptic Curve Cryptography (ECC) is based on a problem known as the ECDLP (Elliptic Curve Discrete Logarithm Problem) and offers several advantages over more traditional encryption systems such as RSA and DSA. The main benefit is that it requires smaller keys to provide the same security level, since breaking the ECDLP is much harder. In addition, a good ECC implementation can be very efficient in both time and memory consumption, making it a good candidate for high speed public key cryptography. Moreover, some elliptic curve based techniques, such as SIDH (Supersingular Isogeny Diffie-Hellman), are known to be extremely resilient to quantum computing attacks. Traditional elliptic curve cryptography implementations are optimized by hand, taking into account the mathematical properties of the underlying algebraic structures, the target machine architecture and the compiler facilities. This process is time consuming, requires a high degree of expertise and is, ultimately, error prone. This dissertation's ultimate goal is to automate the whole optimization process of cryptographic code, with a special focus on ECC. The framework presented in this thesis is able to produce high speed cryptographic code by automatically choosing the best algorithms and applying a number of code-improving techniques inspired by compiler theory.
    Its central component is a flexible and powerful compiler able to translate an algorithm written in a high level language and produce highly optimized C code for a particular algebraic structure and hardware platform. The system is generic enough to accommodate a wide array of number theory related algorithms; however, this document focuses only on optimizing primitives based on elliptic curves defined over binary fields.
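
    At its core, such generated code reduces to arithmetic in the underlying algebraic structure. As a rough illustration only (this is not the thesis framework, nor its optimized C output), the following Python sketch shows addition and multiplication in a binary field GF(2^m); the field size and irreducible polynomial are assumptions chosen for the example.

```python
# Illustrative sketch only: arithmetic in a binary field GF(2^m), the kind of
# structure the generated ECC code operates on. Elements are polynomials over
# GF(2) packed into Python integers. The field GF(2^163) and the irreducible
# polynomial x^163 + x^7 + x^6 + x^3 + 1 are assumptions chosen for the example.

M = 163
IRRED = (1 << 163) | (1 << 7) | (1 << 6) | (1 << 3) | 1

def gf2m_add(a: int, b: int) -> int:
    """Addition in GF(2^m) is carry-less: just XOR."""
    return a ^ b

def gf2m_mul(a: int, b: int) -> int:
    """Shift-and-XOR multiplication with interleaved reduction modulo IRRED."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a >> M:          # degree reached m: reduce by the field polynomial
            a ^= IRRED
    return result

# Tiny check: multiplication distributes over addition in GF(2^m).
x, y, z = 0b1011, 0b1101, 0b0110
assert gf2m_mul(x, gf2m_add(y, z)) == gf2m_add(gf2m_mul(x, y), gf2m_mul(x, z))
```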

    Security at the Edge for Resource-Limited IoT Devices

    Get PDF
    The Internet of Things (IoT) is rapidly growing, with an estimated 14.4 billion active endpoints in 2022 and a forecast of approximately 30 billion connected devices by 2027. This proliferation of IoT devices has come with significant security challenges, including intrinsic security vulnerabilities, limited computing power, and the absence of timely security updates. Attacks leveraging such shortcomings could lead to severe consequences, including data breaches and potential disruptions to critical infrastructures. In response to these challenges, this research paper presents the IoT Proxy, a modular component designed to create a more resilient and secure IoT environment, especially in resource-limited scenarios. The core idea behind the IoT Proxy is to externalize security-related aspects of IoT devices by channeling their traffic through a secure network gateway equipped with different Virtual Network Security Functions (VNSFs). Our solution includes a Virtual Private Network (VPN) terminator and an Intrusion Prevention System (IPS) that uses a machine learning-based technique called oblivious authentication to identify connected devices. The IoT Proxy’s modular, scalable, and externalized security approach creates a more resilient and secure IoT environment, especially for resource-limited IoT devices. The promising experimental results from laboratory testing demonstrate the suitability of the IoT Proxy for securing real-world IoT ecosystems.
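
    The externalized-security idea can be pictured, very loosely, as a chain of virtual security functions applied to every device's traffic at the gateway. The following sketch is purely hypothetical: the class and function names are invented for illustration and do not come from the paper.

```python
# Hypothetical sketch of the IoT Proxy idea: device traffic is funneled through
# a chain of Virtual Network Security Functions (VNSFs) at the gateway instead
# of relying on the constrained device itself. All names are illustrative only.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Packet:
    device_id: str
    payload: bytes

# A VNSF is modelled as a callable that returns the (possibly rewritten) packet,
# or None to drop it.
VNSF = Callable[[Packet], Packet | None]

def vpn_terminator(packet: Packet) -> Packet | None:
    """Placeholder: a real implementation would terminate the VPN tunnel here."""
    return packet

def intrusion_prevention(packet: Packet) -> Packet | None:
    """Placeholder for the ML-based device-identification / IPS step."""
    known_devices = {"thermostat-01", "camera-02"}   # assumed allow-list
    return packet if packet.device_id in known_devices else None

@dataclass
class IoTProxy:
    chain: List[VNSF] = field(default_factory=lambda: [vpn_terminator, intrusion_prevention])

    def forward(self, packet: Packet) -> Packet | None:
        """Apply each VNSF in order; drop the packet as soon as one rejects it."""
        for vnsf in self.chain:
            packet = vnsf(packet)
            if packet is None:
                return None
        return packet

proxy = IoTProxy()
print(proxy.forward(Packet("thermostat-01", b"telemetry")) is not None)   # True
print(proxy.forward(Packet("unknown-device", b"telemetry")) is not None)  # False
```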

    Towards Automatic Risk Analysis and Mitigation of Software Applications

    Get PDF
    This paper proposes a novel semi-automatic risk analysis approach that not only identifies the threats against the assets in a software application, but is also able to quantify their risks and to suggest the software protections to mitigate them. Built on a formal model of the software, attacks, protections and their relationships, our implementation has shown promising performance on real-world applications. This work represents a first step towards a user-friendly expert system for the protection of software applications.
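
    One way to picture the quantification step is to score each threat and rank candidate protections by the risk they would remove. The scoring scheme and names below are assumptions made for illustration, not the paper's actual model or metric.

```python
# Hypothetical sketch: quantify risk as likelihood x impact per threat and rank
# protections by how much risk they mitigate. The scheme is an illustrative
# assumption, not the formal model or metric used in the paper.
from dataclasses import dataclass
from typing import List

@dataclass
class Threat:
    name: str
    asset: str
    likelihood: float   # 0..1
    impact: float       # e.g. monetary or ordinal scale

@dataclass
class Protection:
    name: str
    mitigates: List[str]    # names of the threats it addresses
    effectiveness: float    # fraction of risk removed, 0..1

def risk(threat: Threat) -> float:
    return threat.likelihood * threat.impact

def rank_protections(threats: List[Threat], protections: List[Protection]) -> List[tuple]:
    """Order candidate protections by the total risk they would remove."""
    scored = []
    for p in protections:
        removed = sum(risk(t) * p.effectiveness for t in threats if t.name in p.mitigates)
        scored.append((p.name, removed))
    return sorted(scored, key=lambda item: item[1], reverse=True)

threats = [
    Threat("key extraction", asset="license check", likelihood=0.6, impact=9.0),
    Threat("code lifting", asset="license check", likelihood=0.3, impact=7.0),
]
protections = [
    Protection("white-box crypto", mitigates=["key extraction"], effectiveness=0.8),
    Protection("anti-debugging", mitigates=["key extraction", "code lifting"], effectiveness=0.4),
]
print(rank_protections(threats, protections))
```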

    Encryption-agnostic classifiers of traffic originators and their application to anomaly detection

    Get PDF
    This paper presents an approach that leverages classical machine learning techniques to identify the originating tools from sniffed packets, for both clear-text and encrypted traffic. This research aims to overcome the limitations that the widespread adoption of encrypted communications poses to security monitoring systems. By training three distinct classifiers, this paper shows that it is possible to detect, with excellent accuracy, the category of tools that generated the analyzed traffic (e.g., browsers vs. network stress tools), the actual tools (e.g., Firefox vs. Chrome vs. Edge), and the individual tool versions (e.g., Chrome 48 vs. Chrome 68). The paper provides hints that the classifiers are helpful for the early detection of Distributed Denial of Service (DDoS) attacks, the duplication of entire websites, and the identification of sudden changes in users’ behavior, which might be the consequence of malware infection or data exfiltration.
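
    A rough sketch of this kind of encryption-agnostic classification is given below; the features, synthetic data, and classifier choice are illustrative assumptions and may well differ from the paper's actual setup.

```python
# Hypothetical sketch: classify which tool generated a traffic trace using only
# payload-independent features (packet sizes and timing), so it also works on
# encrypted traffic. Features, data, and model are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def flow_features(packet_sizes, inter_arrival_times):
    """Summarize one flow with simple statistics over sizes and timings."""
    return [
        np.mean(packet_sizes), np.std(packet_sizes), np.max(packet_sizes),
        np.mean(inter_arrival_times), np.std(inter_arrival_times),
        len(packet_sizes),
    ]

# Toy synthetic flows standing in for labelled captures of different tool categories.
rng = np.random.default_rng(0)
tools = {"browser": (900, 0.05), "network stress tool": (120, 0.001)}
X, y = [], []
for label, (size_mu, iat_mu) in tools.items():
    for _ in range(200):
        sizes = rng.normal(size_mu, 50, size=30)
        iats = np.abs(rng.normal(iat_mu, iat_mu / 2, size=30))
        X.append(flow_features(sizes, iats))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```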

    Design, Implementation, and Automation of a Risk Management Approach for Man-at-the-End Software Protection

    Full text link
    The last years have seen an increase in Man-at-the-End (MATE) attacks against software applications, both in number and severity. However, software protection, which aims at mitigating MATE attacks, is dominated by fuzzy concepts and security-through-obscurity. This paper presents a rationale for adopting and standardizing the protection of software as a risk management process according to the NIST SP800-39 approach. We examine the relevant constructs, models, and methods needed for formalizing and automating the activities in this process in the context of MATE software protection. We highlight the open issues that the research community still has to address. We discuss the benefits that such an approach can bring to all stakeholders. In addition, we present a Proof of Concept (PoC) decision support system that instantiates many of the discussed constructs, models, and methods and automates many activities in the risk analysis methodology for the protection of software. Despite being a prototype, the PoC's validation with industry experts indicated that several aspects of the proposed risk management process can already be formalized and automated with our existing toolbox and that it can actually assist decision-making in industrially relevant settings.
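
    The mitigation-decision step of such a process can be pictured, very loosely, as choosing protections under an overhead budget so as to reduce residual risk as much as possible. The greedy strategy and all names below are illustrative assumptions, not the PoC's actual algorithm.

```python
# Hypothetical sketch of a mitigation-decision step: greedily pick the software
# protections that remove the most risk per unit of overhead until a budget is
# exhausted. The greedy heuristic and the numbers are illustrative assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class Protection:
    name: str
    risk_removed: float   # estimated reduction of total MATE risk
    overhead: float       # e.g. runtime or size cost

def select_protections(candidates: List[Protection], budget: float) -> List[str]:
    chosen, spent = [], 0.0
    # Consider the best risk reduction per unit of overhead first.
    for p in sorted(candidates, key=lambda p: p.risk_removed / p.overhead, reverse=True):
        if spent + p.overhead <= budget:
            chosen.append(p.name)
            spent += p.overhead
    return chosen

candidates = [
    Protection("control-flow obfuscation", risk_removed=4.0, overhead=3.0),
    Protection("anti-tampering checks", risk_removed=3.0, overhead=1.5),
    Protection("white-box crypto", risk_removed=5.0, overhead=4.0),
]
print(select_protections(candidates, budget=5.0))
```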

    Toward a Complete Software Stack to Integrate Quantum Key Distribution in a Cloud Environment

    Get PDF
    The advent of Quantum Computing promises to jeopardize current communications security, undermining the effectiveness of traditional public-key cryptography. Different strategies (Post-Quantum Cryptography or Quantum Cryptography) have been proposed to address this problem. Many techniques and algorithms based on quantum phenomena have been presented in recent years; the most relevant example is the introduction of Quantum Key Distribution (QKD). This approach allows parties to exchange cryptographic keys and does not suffer from the development of quantum computation. Problems arise when this technique has to be deployed and combined with modern distributed infrastructures that heavily depend on cloud and virtualisation paradigms. This paper addresses the issue by presenting a new software stack that effortlessly introduces QKD into such environments. This software stack allows for the integration, monitoring, and management of QKD independently of any specific vendor or technology. Furthermore, a QKD simulator is presented, designed, and tested. This latter contribution is suitable as a low-level testing device, as an independent software module to check QKD protocols, and as a testbed to identify future practical enhancements.
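
    As a toy illustration of the kind of protocol such a simulator has to reproduce (this is a textbook BB84 sketch, not the software stack or simulator presented in the paper), two parties can derive a shared key by comparing measurement bases.

```python
# Toy BB84 sketch (textbook protocol, classical simulation, no eavesdropper and
# no noise); NOT the paper's simulator, just an illustration of the kind of
# protocol a QKD simulator has to reproduce.
import random

def bb84_sift(n_qubits: int = 256, seed: int = 1) -> str:
    rng = random.Random(seed)
    # Alice encodes random bits in random bases (0 = rectilinear, 1 = diagonal).
    alice_bits  = [rng.randint(0, 1) for _ in range(n_qubits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_qubits)]
    # Bob measures each qubit in a basis chosen at random.
    bob_bases = [rng.randint(0, 1) for _ in range(n_qubits)]
    bob_bits = [
        bit if ab == bb else rng.randint(0, 1)   # wrong basis -> random outcome
        for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)
    ]
    # Sifting: both publicly announce their bases and keep only matching positions.
    alice_key = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
    bob_key   = [b for b, ab, bb in zip(bob_bits,  alice_bases, bob_bases) if ab == bb]
    assert alice_key == bob_key                  # no noise, no eavesdropper
    return "".join(map(str, alice_key))

print(bb84_sift()[:32])  # first bits of the sifted shared key
```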

    Detection of encrypted cryptomining malware connections with machine and deep learning

    Get PDF
    Nowadays, malware has become an epidemic problem. Among the attacks that exploit victims' computer resources, one that has become common leverages the massive amounts of computational power needed for digital currency cryptomining. Cybercriminals steal computing resources from victims and associate them with the cryptocurrency mining pools they profit from. This research work focuses on offering a solution for detecting such abusive cryptomining activity by means of passive network monitoring alone. To this end, we identify a new set of highly relevant network flow features to be used jointly with a rich set of machine and deep learning models for real-time cryptomining flow detection. We deployed a complex and realistic cryptomining scenario for training and testing these models, in which clients interact with real servers across the Internet and use encrypted connections. A complete set of experiments was carried out to demonstrate that, using a combination of these highly informative features with complex machine learning models, cryptomining attacks can be detected on the wire with telco-grade precision and accuracy, even if the traffic is encrypted.
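
    A very rough sketch of the detection task is given below: a small neural network flags mining flows from payload-independent, flow-level statistics. The features, synthetic data, and architecture are assumptions for illustration, not the paper's feature set or models.

```python
# Hypothetical sketch: a small neural network flags cryptomining flows from
# payload-independent statistics (mining flows tend to be long-lived, low-rate
# and very regular). Features, synthetic data and architecture are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

def synth_flows(n: int, mining: bool) -> torch.Tensor:
    """Columns: log duration, mean packet size, packets/s, inter-arrival jitter."""
    if mining:
        base  = torch.tensor([7.0, 150.0, 0.5, 0.05])
        scale = torch.tensor([0.5, 40.0, 0.2, 0.02])
    else:
        base  = torch.tensor([3.0, 800.0, 20.0, 1.5])
        scale = torch.tensor([0.5, 200.0, 5.0, 0.5])
    return base + torch.randn(n, 4) * scale

X = torch.cat([synth_flows(500, True), synth_flows(500, False)])
y = torch.cat([torch.ones(500, dtype=torch.long), torch.zeros(500, dtype=torch.long)])
X = (X - X.mean(0)) / X.std(0)           # normalize features

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):                      # tiny training loop on the toy data
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

predictions = model(X).argmax(dim=1)
print("training accuracy:", (predictions == y).float().mean().item())
```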